Results 1 - 14 of 14
1.
Neural Netw ; 120: 5-8, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31607596

ABSTRACT

As humans go through life sifting vast quantities of complex information, we extract knowledge from settings that are more ambiguous than our early homes and classrooms. Learning from experience in an individual's unique context generally improves expert performance, despite the risks inherent in brain dynamics that can transform previously reliable expectations. Designers of twenty-first century technologies face the challenges and responsibilities posed by fielded systems that continue to learn on their own. The neural model Self-supervised ART, which can acquire significantly new knowledge in unpredictable contexts, is an example of one such system.


Subjects
Supervised Machine Learning/trends; Unsupervised Machine Learning/trends; History, 20th Century; History, 21st Century; Neural Networks, Computer; Supervised Machine Learning/history; Unsupervised Machine Learning/history
2.
Neural Netw ; 118: 204-207, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31295692

ABSTRACT

As humans go through life sifting vast quantities of complex information, we extract knowledge from settings that are more ambiguous than our early homes and classrooms. Learning from experience in an individual's unique context generally improves expert performance, despite the risks inherent in brain dynamics that can transform previously reliable expectations. Designers of twenty-first century technologies face the challenges and responsibilities posed by fielded systems that continue to learn on their own. The neural model Self-supervised ART, which can acquire significantly new knowledge in unpredictable contexts, is an example of one such system.


Subjects
Neural Networks, Computer; Supervised Machine Learning/trends; Brain/physiology; Forecasting; Humans
3.
Wiley Interdiscip Rev Cogn Sci ; 4(6): 707-719, 2013 Nov.
Article in English | MEDLINE | ID: mdl-26304273

ABSTRACT

Three computational examples illustrate how cognitive science can introduce new approaches to the analysis of large datasets. The first example addresses the question: how can a neural system learning from one example at a time absorb information that is inconsistent but correct, as when a family pet is called Spot and dog and animal, while rejecting similar incorrect information, as when the same pet is called wolf? How does this system transform such scattered information into the knowledge that dogs are animals, but not conversely? The second example asks: how can a real-time system, initially trained with a few labeled examples and a limited feature set, continue to learn from experience when confronted with oceans of additional information, without eroding reliable early memories? How can such individual systems adapt to their unique application contexts? The third example asks: how can a neural system that has made an error refocus attention on environmental features that it had initially ignored? Three models that address these questions, each based on the distributed adaptive resonance theory (dART) neural network, are applied to a spatial testbed created from multimodal remotely sensed data. The article summarizes key design elements of ART models, and provides links to open-source code for each system and the testbed dataset. WIREs Cogn Sci 2013, 4:707-719. doi: 10.1002/wcs.1260. CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article.

4.
Neural Netw ; 37: 93-102, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23031711

ABSTRACT

The DISCOV (DImensionless Shunting COlor Vision) system models a cascade of primate color vision neurons: retinal ganglion, thalamic single opponent, and cortical double opponent. A unified model derived from psychophysical axioms produces transparent network dynamics and principled parameter settings. DISCOV fits an array of physiological data for each cell type, and makes testable experimental predictions. Binary DISCOV augments an earlier version of the model to achieve stable computations for spatial data analysis. The model is described in terms of RGB images, but inputs may consist of any number of spatially defined components. System dynamics are derived using algebraic computations, and robust parameter ranges that meet experimental data are fully specified. Assuming default values, the only free parameter for the user to specify is the spatial scale. Multi-scale analysis accommodates items of various sizes and perspective. Image inputs are first processed by complement coding, which produces an ON channel stream and an OFF channel stream for each component. Subsequent computations are on-center/off-surround, with the OFF channel replacing the off-center/on-surround fields of other models. Together with an orientation filter, DISCOV provides feature input vectors for an integrated recognition system. The development of DISCOV models is being carried out in the context of a large-scale research program that is integrating cognitive and neural systems derived from analyses of vision and recognition to produce both biological models and technological applications.
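To make the complement-coding front end concrete, here is a minimal Python sketch (the function name and array handling are illustrative; the published model adds shunting network dynamics on top of this step):

```python
import numpy as np

def complement_code(components):
    """Complement coding as described above: each spatially defined
    component a in [0, 1] yields an ON channel a and an OFF channel
    1 - a, doubling the feature dimension."""
    on = np.clip(np.asarray(components, dtype=float), 0.0, 1.0)
    off = 1.0 - on
    return np.concatenate([on, off], axis=-1)

# Example: one RGB pixel becomes a six-component ON/OFF feature vector.
pixel = [0.8, 0.2, 0.5]
print(complement_code(pixel))   # [0.8 0.2 0.5 0.2 0.8 0.5]
```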


Subjects
Color Perception/physiology; Color Vision/physiology; Models, Neurological; Pattern Recognition, Visual/physiology; Space Perception/physiology; Animals; Form Perception/physiology; Humans; Photic Stimulation/methods; Retina/physiology; Visual Pathways/physiology
5.
Neural Netw ; 25(1): 161-77, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21982690

ABSTRACT

The self-organizing ARTMAP rule discovery (SOARD) system derives relationships among recognition classes during online learning. SOARD training on input/output pairs produces the basic competence of direct recognition of individual class labels for new test inputs. As a typical supervised system, it learns many-to-one maps, which recognize different inputs (Spot, Rex) as belonging to one class (dog). As an ARTMAP system, it also learns one-to-many maps, allowing a given input (Spot) to learn a new class (animal) without forgetting its previously learned output (dog), even as it corrects erroneous predictions (cat). As it learns individual input/output class predictions, SOARD employs distributed code representations that support online rule discovery. When the input Spot activates the classes dog and animal, confidence in the rule dog→animal begins to grow. When other inputs simultaneously activate classes cat and animal, confidence in the converse rule, animal→dog, decreases. Confidence in a self-organized rule is encoded as the weight in a path from one class node to the other. An experience-based mechanism modulates the rate of rule learning, to keep inaccurate predictions from creating false rules during early learning. Rules may be excitatory or inhibitory so that rule-based activation can add missing classes and remove incorrect ones. SOARD rule activation also enables inputs to learn to make direct predictions of output classes that they have never experienced during supervised training. When input Rex activates its learned class dog, the rule dog→animal indirectly activates the output class animal. The newly activated class serves as a teaching signal which allows input Rex to learn direct activation of the output class animal. Simulations using small-scale and large-scale datasets demonstrate functional properties of the SOARD system in both spatial and time-series domains.
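A toy sketch of the rule-confidence bookkeeping described above (the update rule and learning-rate constant are illustrative assumptions, not the published SOARD equations):

```python
from collections import defaultdict

class RuleTable:
    """conf[(a, b)] tracks accumulated evidence for the rule a -> b."""
    def __init__(self, lr=0.1):
        self.conf = defaultdict(float)
        self.lr = lr

    def observe(self, active):
        active = set(active)
        # Co-activation of a and b supports the rule a -> b ...
        for a in active:
            for b in active - {a}:
                c = self.conf[(a, b)]
                self.conf[(a, b)] = c + self.lr * (1.0 - c)
        # ... while activation of a without b weakens a -> b.
        for (a, b), c in list(self.conf.items()):
            if a in active and b not in active:
                self.conf[(a, b)] = c * (1.0 - self.lr)

rules = RuleTable()
rules.observe({"dog", "animal"})   # supports dog -> animal (and animal -> dog)
rules.observe({"cat", "animal"})   # weakens animal -> dog, supports cat -> animal
print(rules.conf[("dog", "animal")], rules.conf[("animal", "dog")])
```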


Subjects
Databases, Factual/classification; Databases, Factual/statistics & numerical data; Neural Networks, Computer; Animals; Data Interpretation, Statistical; Pilot Projects
6.
Neural Netw ; 24(2): 208-16, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21094022

ABSTRACT

CONFIGR-STARS, a new methodology based on a model of the human visual system, is developed for registration of star images. The algorithm first applies CONFIGR, a neural model that connects sparse and noisy image components. CONFIGR produces a web of connections between stars in a reference starmap or in a test patch of unknown location. CONFIGR-STARS splits the resulting, typically highly connected, web into clusters, or "constellations". Cluster geometry is encoded as a signature vector that records edge lengths and angles relative to the cluster's baseline edge. The location of a test patch cluster is identified by comparing its signature to signatures in the codebook of a reference starmap, where cluster locations are known. Simulations demonstrate robust performance in spite of image perturbations and omissions, and across starmaps from different sources and seasons. Further studies would test CONFIGR-STARS and algorithm variations applied to very large starmaps and to other technologies that may employ geometric signatures. Open-source code, data, and demos are available from http://techlab.bu.edu/STARS/.
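A minimal sketch of the geometric signature idea (the encoding details here are assumptions; the published system also handles web splitting and codebook search):

```python
import numpy as np

def cluster_signature(points, baseline=(0, 1)):
    """Encode a star cluster, as described above, by edge lengths and
    angles measured relative to a baseline edge, so the signature is
    invariant to the patch's translation, rotation, and scale."""
    pts = np.asarray(points, dtype=float)
    i, j = baseline
    base = pts[j] - pts[i]
    base_len = np.linalg.norm(base)
    base_ang = np.arctan2(base[1], base[0])
    sig = []
    for k in range(len(pts)):
        if k in baseline:
            continue
        v = pts[k] - pts[i]
        sig.append([np.linalg.norm(v) / base_len,                        # relative length
                    (np.arctan2(v[1], v[0]) - base_ang) % (2 * np.pi)])  # relative angle
    return np.asarray(sig)

# A test-patch cluster is then located by comparing its signature
# against a codebook of reference-starmap signatures (nearest match).
```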


Subjects
Models, Neurological; Neural Networks, Computer; Vision, Ocular; Cluster Analysis; Humans; Vision, Ocular/physiology
7.
Neural Netw ; 23(2): 265-82, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19699053

ABSTRACT

Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/.
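To illustrate the confidence-scaled unlabeled update, here is a generic sketch under stated assumptions (the update form and rate are illustrative, not the published self-supervised ARTMAP equations):

```python
import numpy as np

def unlabeled_update(W, x, y, base_rate=0.01):
    """Slow distributed learning on an unlabeled pattern x: each
    category j adapts toward the fuzzy intersection x ^ w_j in
    proportion to its distributed activation y_j, so confident
    predictions drive most of the learning on novel features."""
    for j in range(len(W)):
        W[j] += base_rate * y[j] * (np.minimum(x, W[j]) - W[j])
    return W
```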


Subjects
Learning; Neural Networks, Computer; Aging; Algorithms; Animals; Boston; Choice Behavior; Computer Simulation; Databases, Factual; Diagnosis, Computer-Assisted/methods; Environment; Fuzzy Logic; Geography; Humans; Hypothermia/diagnosis; Internet; Memory; Shock/diagnosis; Software; Temperature
8.
Neural Netw ; 23(3): 435-51, 2010 Apr.
Article in English | MEDLINE | ID: mdl-19811892

ABSTRACT

Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
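A toy sketch of the featural-biasing idea (the gain update is an illustrative assumption, not the published bARTMAP equations):

```python
import numpy as np

def rebias(gains, attended, step=0.5):
    """After a predictive error, reduce the gain on the features that
    drove the erroneous match, directing attention toward features the
    system initially ignored."""
    gains = gains * (1.0 - step * attended)
    return gains / gains.max()        # keep the largest gain at 1

gains = np.ones(4)
attended = np.array([1.0, 0.8, 0.0, 0.0])   # features behind the error
print(rebias(gains, attended))              # [0.5 0.6 1.  1. ]
```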


Subjects
Attention; Neural Networks, Computer; Computer Simulation; Databases, Factual; Fuzzy Logic; Internet; Memory
9.
Neural Netw ; 20(10): 1109-31, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18024082

ABSTRACT

CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the "early vision" stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once the pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
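A one-dimensional toy analogue of the completion behavior (an illustration only; the actual model balances figure filling-in against ground filling-in over two-dimensional images):

```python
import numpy as np

def fill_gaps_1d(row):
    """Connect figure pixels across background gaps of any length:
    a 1-D caricature of completing a dashed line."""
    idx = np.flatnonzero(row)
    out = np.array(row)
    if idx.size:
        out[idx[0]:idx[-1] + 1] = 1   # fill every gap between figure pixels
    return out

print(fill_gaps_1d([1, 0, 0, 1, 0, 1]))   # [1 1 1 1 1 1]
```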


Subjects
Algorithms; Computer Simulation; Form Perception; Models, Neurological; Pattern Recognition, Visual/physiology; Vision, Ocular; Humans; Photic Stimulation
10.
Vision Res ; 47(25): 3173-211, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17904187

ABSTRACT

A neural model called dARTEX is proposed to explain how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model unifies five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes like discontinuities in orientation and texture flow curvature as well as to relative orientations of texture elements. The model quantitatively fits the Ben-Shahar and Zucker [Ben-Shahar, O. & Zucker, S. (2004). Sensitivity to curvatures in orientation-based texture segmentation. Vision Research, 44, 257-277] human psychophysical data on orientation-based textures. Surface-based attentional shrouds improve texture learning and classification: Brodatz texture classification rate varies from 95.1% to 98.6% with correct attention, and from 74.1% to 75.5% without attention. Object boundary output of the model in response to photographic images is compared to computer vision algorithms and human segmentations.


Subjects
Attention; Discrimination Learning/physiology; Form Perception/physiology; Models, Psychological; Visual Cortex/physiology; Algorithms; Computer Simulation; Contrast Sensitivity/physiology; Humans; Psychometrics; Psychophysics; Visual Pathways/physiology
11.
Neural Netw ; 16(7): 1075-89, 2003 Sep.
Article in English | MEDLINE | ID: mdl-14692640

ABSTRACT

The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images. The Lincoln Lab system has been successfully fielded, but is limited to target/non-target identifications and does not produce whole maps. Procedures defined here extend these capabilities by means of a mapping method that learns to identify and distribute arbitrarily many target classes. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of canonical algorithms and a benchmark testbed has enabled the evaluation of candidate recognition networks as well as pre- and post-processing and feature selection options. The resulting mapping methodology sets a standard for a variety of spatial data mining tasks. In particular, training pixels are drawn from a region that is spatially distinct from the mapped region, which could feature an output class mix that is substantially different from that of the training set. The system recognition component, default ARTMAP, with its fully specified set of canonical parameter values, has become the a priori system of choice among this family of neural networks for a wide variety of applications.


Subjects
Computer Simulation; Maps as Topic; Neural Networks, Computer; Software
12.
Neural Comput ; 14(4): 873-88, 2002 Apr.
Article in English | MEDLINE | ID: mdl-11936965

ABSTRACT

Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse long-term potentiation (LTP) measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. This observed change in frequency dependence during synaptic potentiation is called redistribution of synaptic efficacy (RSE). RSE is here seen as the local realization of a global design principle in a neural network for pattern coding. The underlying computational model posits an adaptive threshold rather than a multiplicative weight as the elementary unit of long-term memory. A distributed instar learning law allows thresholds to increase only monotonically, but adaptation has a bidirectional effect on the model postsynaptic potential. At each synapse, threshold increases implement pattern selectivity via a frequency-dependent signal component, while a complementary frequency-independent component nonspecifically strengthens the path. This synaptic balance produces changes in frequency dependence that are robustly similar to those observed by Markram and Tsodyks. The network design therefore suggests a functional purpose for RSE, which, by helping to bound total memory change, supports a distributed coding scheme that is stable with fast as well as slow learning. Multiplicative weights have served as a cornerstone for models of physiological data and neural systems for decades. Although the model discussed here does not implement detailed physiology of synaptic transmission, its new learning laws operate in a network architecture that suggests how recently discovered synaptic computations such as RSE may help produce new network capabilities such as learning that is fast, stable, and distributed.
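The frequency dependence at issue can be illustrated with a common simplification of the Tsodyks-Markram depressing-synapse model, steady-state amplitude ≈ A·U/(1 + U·f·τ_rec) (a sketch under that stated approximation; the dART learning laws themselves are not reproduced here):

```python
def steady_state_epsp(f, U, A=1.0, tau_rec=0.8):
    """Approximate steady-state EPSP amplitude at test rate f (Hz) for
    a depressing synapse with utilization U, recovery time constant
    tau_rec (s), and absolute efficacy A."""
    return A * U / (1.0 + U * f * tau_rec)

# 'Potentiation' that raises U rather than a multiplicative gain:
# the efficacy gain seen at low test rates largely disappears as the
# test frequency rises -- redistribution of synaptic efficacy.
for f in (0.1, 5.0, 20.0):
    print(f"{f:5.1f} Hz: before {steady_state_epsp(f, U=0.2):.3f}, "
          f"after {steady_state_epsp(f, U=0.6):.3f}")
```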


Subjects
Artificial Intelligence; Neural Networks, Computer; Pattern Recognition, Automated; Synapses/physiology; Algorithms; Long-Term Potentiation; Models, Neurological; Receptors, Presynaptic/physiology
13.
Neural Netw ; 11(5): 793-813, 1998 Jul.
Article in English | MEDLINE | ID: mdl-12662783

ABSTRACT

Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. An implementation algorithm here describes one class of dARTMAP networks. This system incorporates elements of the unsupervised dART model, as well as new features, including a content-addressable memory (CAM) rule for improved contrast control at the coding field. A dARTMAP system reduces to fuzzy ARTMAP when coding is winner-take-all. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression.
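For reference, the winner-take-all special case to which dARTMAP reduces is fuzzy ART/ARTMAP; a minimal sketch of its category choice, match, and fast-learning steps (parameter names follow common usage):

```python
import numpy as np

def fuzzy_art_search(x, W, rho=0.75, alpha=0.001):
    """One winner-take-all search cycle over committed categories.
    x: complement-coded input; W: one weight row per category.
    Returns the first category, taken in order of the choice
    function, that passes the vigilance test, or None."""
    match = np.minimum(x, W)                          # fuzzy AND: x ^ w_j
    T = match.sum(axis=1) / (alpha + W.sum(axis=1))   # choice function
    for j in np.argsort(-T):                          # search highest first
        if match[j].sum() / x.sum() >= rho:           # vigilance (match) test
            return j
    return None

def fuzzy_art_learn(x, w, beta=1.0):
    """Fast learning (beta = 1) moves w to x ^ w."""
    return beta * np.minimum(x, w) + (1.0 - beta) * w
```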

14.
Neural Netw ; 10(8): 1473-1494, 1997 Nov.
Article in English | MEDLINE | ID: mdl-12662488

ABSTRACT

A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
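The dynamic weight defined in this abstract can be written compactly; a LaTeX sketch (the subscript convention, node j projecting to node i, is assumed):

```latex
% Dynamic weight: rectified difference between coding-node activation
% y_j and the monotonically nondecreasing adaptive threshold tau_ij.
w_{ij} \equiv \left[\, y_j - \tau_{ij} \,\right]^{+}
       = \max\!\left( y_j - \tau_{ij},\, 0 \right),
\qquad \tau_{ij}(t)\ \text{nondecreasing in } t .
```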
